
    Facility Location in Evolving Metrics

    Understanding the dynamics of evolving social or infrastructure networks is a challenge in applied areas such as epidemiology, viral marketing, or urban planning. During the past decade, data has been collected on such networks but has yet to be fully analyzed. We propose to use information on the dynamics of the data to find stable partitions of the network into groups. For that purpose, we introduce a time-dependent, dynamic version of the facility location problem that includes a switching cost when a client's assignment changes from one facility to another. This might provide a better representation of an evolving network, emphasizing abrupt changes in the relationships between subjects rather than the continuous evolution of the underlying network. We show that, in realistic examples, this model indeed yields better-fitting solutions than optimizing every snapshot independently. We present an O(log nT)-approximation algorithm and a matching hardness result, where n is the number of clients and T the number of time steps. We also give another O(log nT)-approximation algorithm for the variant where one pays at each time step (leasing) for each open facility.
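    As an illustration of the model only (not the paper's algorithm), the sketch below evaluates the time-dependent objective under simplifying assumptions: a uniform opening cost f per open facility per time step and a uniform switching cost g per reassignment. All names and the tiny instance are hypothetical.

```python
# Minimal sketch of a dynamic facility location objective with switching costs
# (illustrative only; uniform costs f and g are simplifying assumptions).

def dynamic_fl_cost(assignment, dist, f, g):
    """assignment[t][c] = facility serving client c at time step t.
    dist[t][(c, fac)] = distance between client c and facility fac at time t."""
    total = 0.0
    for t, assign_t in enumerate(assignment):
        open_facilities = set(assign_t.values())
        total += f * len(open_facilities)                               # opening cost
        total += sum(dist[t][(c, fac)] for c, fac in assign_t.items())  # connection cost
        if t > 0:                                                       # switching cost
            prev = assignment[t - 1]
            total += g * sum(1 for c in assign_t if assign_t[c] != prev.get(c))
    return total

# Tiny example: 2 clients, 2 time steps; client "b" switches facility at t = 1.
dist = [
    {("a", "F1"): 1.0, ("b", "F1"): 2.0, ("a", "F2"): 3.0, ("b", "F2"): 1.0},
    {("a", "F1"): 1.0, ("b", "F1"): 5.0, ("a", "F2"): 3.0, ("b", "F2"): 1.0},
]
assignment = [{"a": "F1", "b": "F1"}, {"a": "F1", "b": "F2"}]
print(dynamic_fl_cost(assignment, dist, f=4.0, g=1.5))  # 18.5
```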

    Enumerating Subgraph Instances Using Map-Reduce

    The theme of this paper is how to find all instances of a given "sample" graph in a larger "data graph," using a single round of map-reduce. For the simplest sample graph, the triangle, we improve upon the best known such algorithm. We then examine the general case, considering both the communication cost between mappers and reducers and the total computation cost at the reducers. To minimize communication cost, we exploit the techniques of Afrati and Ullman (TKDE 2011) for computing multiway joins (evaluating conjunctive queries) in a single map-reduce round. Several methods are shown for translating sample graphs into a union of conjunctive queries with as few queries as possible. We also address the matter of optimizing computation cost. Many serial algorithms are shown to be "convertible," in the sense that it is possible to partition the data graph, explore each partition in a separate reducer, and have the total computation cost at the reducers be of the same order as the computation cost of the serial algorithm.
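    For intuition, here is a simplified single-process simulation of the well-known one-round partition approach to triangle enumeration that this line of work builds on. It is a sketch under assumed conventions (hash-based vertex buckets, reducer keys as bucket multisets), not the improved algorithm of the paper.

```python
# One-round map-reduce triangle enumeration via vertex partitioning,
# simulated in a single process (illustrative sketch). Vertices are hashed
# into B buckets; each edge is replicated to every reducer key {i,j,k}
# (a multiset with i <= j <= k) containing both endpoint buckets; each
# reducer enumerates triangles locally and emits a triangle only if the
# triangle's own sorted bucket triple equals the reducer key, so every
# triangle is reported exactly once.
from collections import defaultdict
from itertools import combinations_with_replacement

def enumerate_triangles(edges, B=3):
    h = lambda v: hash(v) % B
    # --- map phase: replicate each edge to the reducers that may need it ---
    reducer_input = defaultdict(set)
    for u, v in edges:
        bu, bv = sorted((h(u), h(v)))
        for key in combinations_with_replacement(range(B), 3):
            remaining = list(key)
            if bu in remaining:
                remaining.remove(bu)
                if bv in remaining:
                    reducer_input[key].add((min(u, v), max(u, v)))
    # --- reduce phase: local triangle enumeration with deduplication ---
    triangles = set()
    for key, local_edges in reducer_input.items():
        adj = defaultdict(set)
        for u, v in local_edges:
            adj[u].add(v)
            adj[v].add(u)
        for u, v in local_edges:
            for w in adj[u] & adj[v]:
                tri = tuple(sorted((u, v, w)))
                if tuple(sorted(h(x) for x in tri)) == key:
                    triangles.add(tri)
    return triangles

print(enumerate_triangles([(1, 2), (2, 3), (1, 3), (3, 4), (4, 1)]))
# -> {(1, 2, 3), (1, 3, 4)}
```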

    The Power of Verification for Greedy Mechanism Design

    Greedy algorithms are known to provide, in polynomial time, near-optimal approximation guarantees for Combinatorial Auctions (CAs) with multidimensional bidders. It is also known that truthful greedy-like mechanisms for CAs with multi-minded bidders do not achieve good approximation guarantees. In this work, we seek a deeper understanding of greedy mechanism design and investigate under which general assumptions we can have efficient and truthful greedy mechanisms for CAs. Towards this goal, we use the framework of priority algorithms together with weak and strong verification, under which bidders are not allowed to overbid on their winning set or on any subset of this set, respectively. We provide a complete characterization of the power of weak verification, showing that it is sufficient and necessary for any greedy fixed-priority algorithm to become truthful, with or without the use of money, depending on the ordering of the bids. Moreover, we show that strong verification is sufficient and necessary to obtain a 2-approximate truthful mechanism with money, based on a known greedy algorithm, for the problem of submodular CAs in finite bidding domains. Our proof is based on an interesting structural analysis of the strongly connected components of the declaration graph.
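    As a rough illustration of the greedy fixed-priority flavour discussed here (not the specific mechanism analysed in the paper), the sketch below allocates bundles to single-minded bidders in a fixed priority order. The value-per-square-root-of-size ordering, the bid format, and all names are assumptions made for the example; verification, in the sense used above, would restrict how bidders may overbid on the sets they win.

```python
# Illustrative greedy fixed-priority allocation for single-minded bidders.
# Each bid is (bidder, desired_set, declared_value); bids are processed in a
# fixed priority order and granted greedily if the desired set is still free.
from math import sqrt

def greedy_allocation(bids):
    # Fixed priority: value per square root of set size (a common choice for
    # single-minded CAs; any fixed ordering of the declarations fits the
    # priority-algorithm framework).
    order = sorted(bids, key=lambda b: b[2] / sqrt(len(b[1])), reverse=True)
    allocated_items, winners = set(), {}
    for bidder, wanted, value in order:
        if allocated_items.isdisjoint(wanted):
            winners[bidder] = (frozenset(wanted), value)
            allocated_items |= set(wanted)
    return winners

bids = [("alice", {1, 2}, 10.0), ("bob", {2, 3}, 7.0), ("carol", {3}, 4.0)]
print(greedy_allocation(bids))  # alice wins {1, 2}, carol wins {3}
```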

    The Power of Verification for Greedy Mechanism Design

    Greedy algorithms are known to provide near-optimal approximation guarantees for Combinatorial Auctions (CAs) with multidimensional bidders, ignoring incentive compatibility. Borodin and Lucier [5], however, proved that truthful greedy-like mechanisms for CAs with multi-minded bidders do not achieve good approximation guarantees. In this work, we seek a deeper understanding of greedy mechanism design and investigate under which general assumptions we can have efficient and truthful greedy mechanisms for CAs. Towards this goal, we use the framework of priority algorithms together with weak and strong verification, under which bidders are not allowed to overbid on their winning set or on any subset of this set, respectively. We provide a complete characterization of the power of weak verification, showing that it is sufficient and necessary for any greedy fixed-priority algorithm to become truthful, with or without the use of money, depending on the ordering of the bids. Moreover, we show that strong verification is sufficient and necessary for the greedy algorithm of [20], which is 2-approximate for submodular CAs, to become truthful with money in finite bidding domains. Our proof is based on an interesting structural analysis of the strongly connected components of the declaration graph.

    Scheduling MapReduce Jobs under Multi-Round Precedences

    We consider non-preemptive scheduling of MapReduce jobs with multiple tasks in the practical scenario where each job requires several map-reduce rounds. We seek to minimize the average weighted completion time and consider scheduling on identical and unrelated parallel processors. For identical processors, we present LP-based O(1)-approximation algorithms. For unrelated processors, the approximation ratio naturally depends on the maximum number of rounds of any job. Since the number of rounds per job in typical MapReduce algorithms is a small constant, our scheduling algorithms achieve a small approximation ratio in practice. For the single-round case, we substantially improve on the previously best-known approximation guarantees for both identical and unrelated processors. Moreover, we conduct an experimental analysis and compare the performance of our algorithms against a fast heuristic and a lower bound on the optimal solution, thus demonstrating their promising practical performance.
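    For context, the sketch below is a simple weighted-shortest-processing-time list-scheduling baseline on identical processors, of the kind a fast heuristic comparison might use. It is not the paper's LP-based algorithm, and the single-task job model (ignoring map/reduce rounds) is a simplifying assumption.

```python
# WSPT-style list scheduling on identical machines (illustrative baseline).
# Each job is (job_id, processing_time, weight); the objective computed is
# the total weighted completion time of the resulting schedule.
import heapq

def wspt_list_schedule(jobs, num_machines):
    # Priority: largest weight-to-processing-time ratio first (Smith's rule).
    order = sorted(jobs, key=lambda j: j[2] / j[1], reverse=True)
    machines = [(0.0, m) for m in range(num_machines)]  # (available_time, machine_id)
    heapq.heapify(machines)
    total_weighted_completion = 0.0
    schedule = []
    for job_id, p, w in order:
        available, m = heapq.heappop(machines)   # earliest-free machine
        completion = available + p
        schedule.append((job_id, m, available, completion))
        total_weighted_completion += w * completion
        heapq.heappush(machines, (completion, m))
    return schedule, total_weighted_completion

jobs = [("j1", 3.0, 2.0), ("j2", 1.0, 1.0), ("j3", 2.0, 4.0)]
print(wspt_list_schedule(jobs, num_machines=2))  # total weighted completion time 17.0
```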

    Dynamics of ripple formation on silicon surfaces by ultrashort laser pulses in sub-ablation conditions

    We investigate ultrashort-pulse laser-induced surface modification under conditions that result in a superheated melted liquid layer and material evaporation. To describe the surface modification occurring after cooling and resolidification of the melted layer, and to understand the underlying fundamental physical mechanisms, a unified model is presented that accounts for crater and subwavelength ripple formation based on a synergy of electron excitation and capillary-wave solidification. The proposed theoretical framework addresses the laser-material interaction in sub-ablation conditions, and thus with minimal mass removal, in combination with a hydrodynamics-based scenario of crater creation and ripple formation following surface irradiation with single and multiple pulses, respectively. The development of the periodic structures is attributed to the interference of the incident wave with a surface plasmon wave. Details of the attained surface morphology are elaborated as a function of the imposed conditions, and the results are tested against experimental data.
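    For reference, the relation commonly used to connect the ripple period to interference between the incident field and an excited surface plasmon at normal incidence (a standard textbook formula, not a result derived in this work) is

\[
\Lambda \;\approx\; \lambda_{\mathrm{SPP}} \;=\; \frac{\lambda}{\operatorname{Re}\sqrt{\dfrac{\varepsilon\,\varepsilon_d}{\varepsilon+\varepsilon_d}}},
\]

    where \(\lambda\) is the laser wavelength, \(\varepsilon\) the excitation-dependent dielectric function of the irradiated silicon, and \(\varepsilon_d\) that of the surrounding medium; the subwavelength period of the ripples follows from \(\lambda_{\mathrm{SPP}} < \lambda\).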

    k-L(2,1)-Labelling for Planar Graphs is NP-Complete for k >= 4

    A mapping from the vertex set of a graph G = (V,E) into an interval of integers {0,...,k} is an L(2,1)-labelling of G of span k if any two adjacent vertices are mapped onto integers that are at least 2 apart, and every two vertices with a common neighbour are mapped onto distinct integers. It is known that for any fixed k >= 4, deciding the existence of such a labelling is an NP-complete problem, while it is polynomial for k <= 3; for even k >= 8, it remains NP-complete when restricted to planar graphs. In this paper, we show that it remains NP-complete for planar graphs for any k >= 4 by reduction from Planar Cubic Two-Colourable Perfect Matching. Schaefer stated without proof that Planar Cubic Two-Colourable Perfect Matching is NP-complete; in this paper we also give a proof of this. (To appear in Discrete Applied Mathematics.)
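    To make the definition concrete, here is a minimal checker for whether a given mapping is an L(2,1)-labelling of span k. It is an illustrative helper written for this summary, not code from the paper.

```python
# Check whether `labelling` is an L(2,1)-labelling of span k for the graph
# given by `vertices` and `edges`, following the definition above.
from itertools import combinations

def is_L21_labelling(vertices, edges, labelling, k):
    adj = {v: set() for v in vertices}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    if any(not (0 <= labelling[v] <= k) for v in vertices):
        return False                                # labels must lie in {0,...,k}
    for u, v in edges:                              # adjacent: labels at least 2 apart
        if abs(labelling[u] - labelling[v]) < 2:
            return False
    for u, v in combinations(vertices, 2):          # common neighbour: distinct labels
        if adj[u] & adj[v] and labelling[u] == labelling[v]:
            return False
    return True

# Path a-b-c: a and c share the neighbour b, so they need distinct labels.
print(is_L21_labelling(["a", "b", "c"], [("a", "b"), ("b", "c")],
                       {"a": 0, "b": 2, "c": 4}, k=4))  # True
```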
